SimTracker - Using the Web to track computer simulation results
Large-scale computer simulations, a hallmark of computing at Lawrence Livermore National Laboratory (LLNL), often take days to run and can produce massive amounts of output. The typical environment of many LLNL scientists includes multiple hardware platforms, a large collection of eclectic software applications, data stored on many devices in many formats, and little standard metadata (accessible documentation about the data). The exploration of simulation results typically proceeds as a laborious process requiring knowledge of this complex environment and many application programs. We have addressed this problem by developing a web-based approach for exploring simulation results via the automatic generation of metadata summaries which provide convenient access to the data sets and associated analysis tools. In this paper we will describe the SimTracker tool for automatically generating metadata that serves as a quick overview and index to the archived results of simulations. The SimTracker application consists of two parts - a generation component and a viewing component. The generation component captures and generates calculation metadata from a simulation. These metadata include graphical snapshots from various stages of the run, pointers to the input and output files from the simulation, and assorted annotations describing the run. SimTracker generation can be done either during a simulation or afterwards. When integrated with a code system, SimTracker does its work on the fly, allowing the user to monitor a calculation while it is running. The viewing component of SimTracker provides a web-based mechanism for both quick perusal and careful analysis of simulation results. HTML is created on the fly from a series of Perl CGI scripts and metadata extracted from a database.
A variety of views are provided, ranging from a high-level table of contents showing all of one's simulations, to an in-depth results page from which numeric values can be extracted and analysis tools can easily be launched. Annotations can be associated with a calculation at any time, allowing an end-user to customize the summary pages with titles, abstracts, and pointers to related information, for example. In this paper, we will present an overview of the design, implementation, and operational aspects of the SimTracker application. We will also discuss how it is being deployed in the environment of the Accelerated Strategic Computing Initiative [1]. SimTracker was designed as an extensible application that we are now adapting for use with several simulation codes.
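The generation idea described above can be sketched in a few lines: gather pointers to a run's files, snapshots, and annotations, and emit an HTML summary page. This is a minimal illustrative sketch in Python (the actual SimTracker generated pages on the fly from Perl CGI scripts and a database); all names below are hypothetical.

```python
import html
from datetime import datetime
from pathlib import Path

def generate_summary(run_name, input_files, output_files, snapshots, annotations):
    """Build a minimal HTML metadata summary for a simulation run.

    Hypothetical sketch of SimTracker-style metadata capture: file
    pointers, graphical snapshots, and free-form annotations rendered
    as a single overview page.
    """
    rows = [f"<h1>Run: {html.escape(run_name)}</h1>",
            f"<p>Summary generated {datetime.now():%Y-%m-%d %H:%M}</p>"]
    for label, files in (("Input files", input_files), ("Output files", output_files)):
        rows.append(f"<h2>{label}</h2><ul>")
        rows.extend(f'<li><a href="{html.escape(f)}">{html.escape(Path(f).name)}</a></li>'
                    for f in files)
        rows.append("</ul>")
    rows.append("<h2>Snapshots</h2>")
    rows.extend(f'<img src="{html.escape(s)}" width="200">' for s in snapshots)
    rows.append("<h2>Annotations</h2><ul>")
    rows.extend(f"<li>{html.escape(a)}</li>" for a in annotations)
    rows.append("</ul>")
    return "<html><body>\n" + "\n".join(rows) + "\n</body></html>"
```

A viewing component would then serve such pages, with the table-of-contents view linking to one summary per run.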
Mining scientific data archives through metadata generation
Data analysis and management tools typically have not supported the documenting of data, so scientists must manually maintain all information pertaining to the context and history of their work. This metadata is critical to effective retrieval and use of the masses of archived data, yet little of it exists on-line or in an accessible format. Exploration of archived legacy data typically proceeds as a laborious process, using commands to navigate through file structures on several machines. This file-at-a-time approach needs to be replaced with a model that represents data as collections of interrelated objects. The tools that support this model must focus attention on data while hiding the complexity of the computational environment. This problem was addressed by developing a tool for exploring large amounts of data in UNIX directories via automatic generation of metadata summaries. This paper describes the model for metadata summaries of collections and the Data Miner tool for interactively traversing directories and automatically generating metadata that serves as a quick overview and index to the archived data. The summaries include thumbnail images as well as links to the data, related directories, and other metadata. Users may personalize the metadata by adding a title and abstract to the summary, which is presented as an HTML page viewed with a World Wide Web browser. We have designed summaries for three types of collections of data: contents of a single directory; virtual directories that represent relations between scattered files; and groups of related calculation files. By focusing on the scientists' view of the data mining task, we have developed techniques that assist in the "detective work" of mining without requiring knowledge of mundane details about formats and commands. Experiences in working with scientists to design these tools are recounted.
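The single-directory summary type described above amounts to walking a file tree and emitting one HTML index per directory. A minimal sketch of that idea, assuming nothing about Data Miner's actual code:

```python
import html
import os

def summarize_directory(root):
    """Walk a directory tree and build a simple HTML metadata summary
    per directory: subdirectory links plus file names and sizes.

    A hypothetical sketch of the single-directory collection summary;
    the real tool also handled thumbnails, virtual directories, and
    groups of related calculation files.
    """
    summaries = {}
    for dirpath, dirnames, filenames in os.walk(root):
        items = []
        for d in sorted(dirnames):
            items.append(f'<li>[dir] <a href="{html.escape(d)}/index.html">'
                         f'{html.escape(d)}</a></li>')
        for f in sorted(filenames):
            size = os.path.getsize(os.path.join(dirpath, f))
            items.append(f'<li><a href="{html.escape(f)}">{html.escape(f)}</a>'
                         f' ({size} bytes)</li>')
        summaries[dirpath] = (f"<h1>{html.escape(dirpath)}</h1><ul>"
                              + "".join(items) + "</ul>")
    return summaries
```

The user-supplied title and abstract mentioned in the abstract would simply be prepended to each generated page.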
From Petascale to Exascale: Eight Focus Areas of R&D Challenges for HPC Simulation Environments
Programming models bridge the gap between the underlying hardware architecture and the supporting layers of software available to applications. Programming models are different from both programming languages and application programming interfaces (APIs). Specifically, a programming model is an abstraction of the underlying computer system that allows for the expression of both algorithms and data structures. In comparison, languages and APIs provide implementations of these abstractions and allow the algorithms and data structures to be put into practice - a programming model exists independently of the choice of both the programming language and the supporting APIs. Programming models are typically focused on achieving increased developer productivity, performance, and portability to other system designs. The rapidly changing nature of processor architectures and the complexity of designing an exascale platform provide significant challenges for these goals. Several other factors are likely to impact the design of future programming models. In particular, the representation and management of increasing levels of parallelism, concurrency and memory hierarchies, combined with the ability to maintain a progressive level of interoperability with today's applications are of significant concern. Overall the design of a programming model is inherently tied not only to the underlying hardware architecture, but also to the requirements of applications and libraries including data analysis, visualization, and uncertainty quantification. Furthermore, the successful implementation of a programming model is dependent on exposed features of the runtime software layers and features of the operating system. Successful use of a programming model also requires effective presentation to the software developer within the context of traditional and new software development tools. 
Consideration must also be given to the impact of programming models on both languages and the associated compiler infrastructure. Exascale programming models must reflect several, often competing, design goals. These design goals include desirable features such as abstraction and separation of concerns. However, some aspects are unique to large-scale computing. For example, interoperability and composability with existing implementations will prove critical. Above all, performance is the essential underlying goal for large-scale systems. A key evaluation metric for exascale models will be the extent to which they support these goals rather than merely enable them.
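The distinction the abstract draws between a programming model and its implementations can be illustrated concretely (an illustrative example, not from the paper): a data-parallel map over independent elements is one abstraction, and a given language may offer several interchangeable realizations of it.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

data = list(range(8))

# The programming model (a data-parallel map over independent elements)
# stays the same; only the implementation underneath it changes.
serial = list(map(square, data))            # sequential implementation

with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(square, data))  # thread-pool implementation

# Both implementations realize the same abstraction and agree on results.
assert serial == parallel == [0, 1, 4, 9, 16, 25, 36, 49]
```

The model's contract (element independence) is what lets the runtime swap implementations without changing the algorithm's expression, which is exactly the portability goal discussed above.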
DOECGF 2010 Site Report
The Data group provides data analysis and visualization support to its customers. This consists primarily of the development and support of VisIt, a data analysis and visualization tool. Support includes answering questions about the tool, providing classes on how to use it, and performing data analysis and visualization for customers. The Information Management and Graphics Group supports and develops tools that enhance our ability to access, display, and understand large, complex data sets. Activities include applying visualization software for large scale data exploration; running video production labs on two networks; supporting graphics libraries and tools for end users; maintaining PowerWalls and assorted other displays; and developing software for searching and managing scientific data. Researchers in the Center for Applied Scientific Computing (CASC) work on various projects including the development of visualization techniques for large scale data exploration that are funded by the ASC program, among others. The researchers also have LDRD projects and collaborations with other lab researchers, academia, and industry. The IMG group is located in the Terascale Simulation Facility, home to Dawn, Atlas, BGL, and others, which includes both classified and unclassified visualization theaters, a visualization computer floor and deployment workshop, and video production labs. We continued to provide the traditional graphics group consulting and video production support. We maintained five PowerWalls and many other displays. We deployed a 576-node Opteron/IB cluster with 72 TB of memory providing a visualization production server on our classified network. We continue to support a 128-node Opteron/IB cluster providing a visualization production server for our unclassified systems and an older 256-node Opteron/IB cluster for the classified systems, as well as several smaller clusters to drive the PowerWalls.
The visualization production systems include NFS servers that provide dedicated storage for data analysis and visualization. The ASC projects have delivered new versions of visualization and scientific data management tools to end users and continue to refine them. VisIt had four releases during the past year, ending with VisIt 2.0. We released version 2.4 of Hopper, a Java application for managing and transferring files. This release included a graphical disk usage view which works on all types of connections and an aggregated copy feature for transferring massive datasets quickly and efficiently to HPSS. We continue to use and develop Blockbuster and Telepath. Both the VisIt and IMG teams were engaged in a variety of movie production efforts during the past year in addition to the development tasks.
International multicenter trial of bronchial valve treatment for severe emphysema
The IBV Valve trial: a multicenter, randomized, double-blind trial of endobronchial therapy for severe emphysema.
BACKGROUND: Lung volume reduction surgery improves quality of life, exercise capacity, and survival in selected patients but is accompanied by significant morbidity. Bronchoscopic approaches may provide similar benefits with less morbidity.
METHODS: In a randomized, sham-procedure-controlled, double-blind trial, 277 subjects were enrolled at 36 centers. Patients had emphysema, airflow obstruction, hyperinflation, and severe dyspnea. The primary effectiveness measure was a significant improvement in disease-related quality of life (St. George's Respiratory Questionnaire) and changes in lobar lung volumes. The primary safety measure was a comparison of serious adverse events.
RESULTS: There were 6/121 (5.0%) responders in the treatment group at 6 months, significantly greater than the 1/134 (0.7%) in the control group [Bayesian credible interval (BCI), 0.05%, 9.21%]. Lobar volume changes were significantly different, with an average decrease in the treated lobes of -224 mL compared with -17 mL for the control group (BCI, -272, -143). The proportion of responders on St. George's Respiratory Questionnaire was not greater in the treatment group. There were significantly more subjects with a serious adverse event in the treatment group (n=20 or 14.1%) compared with the control group (n=5 or 3.7%) (BCI, 4.0, 17.1), but most were neither procedure nor device related.
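As a quick arithmetic check on the figures reported above (a simple recomputation of the stated ratios, not part of the trial analysis):

```python
# Recompute the reported response rates from the raw counts.
treatment_rate = 6 / 121   # treatment-group responders at 6 months
control_rate = 1 / 134     # control-group responders

assert round(treatment_rate * 100, 1) == 5.0   # matches reported 5.0%
assert round(control_rate * 100, 1) == 0.7     # matches reported 0.7%

# The between-group difference in lobar volume change is -224 - (-17) = -207 mL,
# which lies inside the reported credible interval (-272, -143).
difference_ml = -224 - (-17)
assert -272 <= difference_ml <= -143
```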
CONCLUSIONS: This trial had technical and statistical success but partial-bilateral endobronchial valve occlusion did not obtain clinically meaningful results. Safety results were acceptable and compare favorably to lung volume reduction surgery and other bronchial valve studies. Further studies need to focus on improved patient selection and a different treatment algorithm.
TRIAL REGISTRY: ClinicalTrials.gov NCT00475007